'''Proximal gradient (forward backward splitting) methods for learning''' is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is <math>\ell_1</math> regularization (also known as Lasso) of the form

: <math>\min_{w\in\mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n (y_i - \langle w, x_i\rangle)^2 + \lambda\|w\|_1, \qquad \text{where } x_i\in\mathbb{R}^d \text{ and } y_i\in\mathbb{R}.</math>

Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application. Such customized penalties can help to induce certain structure in problem solutions, such as ''sparsity'' (in the case of lasso) or ''group structure'' (in the case of group lasso).

== Relevant background ==
Proximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form

: <math>\min_{x\in\mathcal{H}} F(x) + R(x),</math>

where <math>F</math> is convex and differentiable, <math>R</math> is convex, lower semicontinuous, and possibly nondifferentiable, and <math>\mathcal{H}</math> is some set, typically a Hilbert space. The usual criterion that <math>x</math> minimizes <math>F(x)</math> if and only if <math>\nabla F(x) = 0</math> in the convex, differentiable setting is now replaced by

: <math>0 \in \partial (F+R)(x),</math>

where <math>\partial\varphi</math> denotes the subdifferential of a real-valued, convex function <math>\varphi</math>.

Given a convex function <math>\varphi:\mathcal{H}\to\mathbb{R}</math>, an important operator to consider is its proximity operator <math>\operatorname{prox}_{\varphi}:\mathcal{H}\to\mathcal{H}</math>, defined by

: <math>\operatorname{prox}_{\varphi}(u) = \operatorname{arg\,min}_{x\in\mathcal{H}}\; \varphi(x) + \frac{1}{2}\|u - x\|_2^2,</math>

which is well-defined because of the strict convexity of the <math>\ell_2</math> norm. The proximity operator can be seen as a generalization of a projection.

We see that the proximity operator is important because <math>x^*</math> is a minimizer of the problem <math>\min_{x\in\mathcal{H}} F(x) + R(x)</math> if and only if

: <math>x^* = \operatorname{prox}_{\gamma R}\left(x^* - \gamma\nabla F(x^*)\right),</math>

where <math>\gamma > 0</math> is any positive real number.
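To make the proximity operator concrete, the following is a minimal Python sketch (not part of the original article; the name <code>prox_l1</code>, the example vector, and the use of SciPy's Nelder-Mead solver as a reference are illustrative assumptions). It evaluates the proximity operator of <math>\varphi(x) = \gamma\|x\|_1</math>, which has the closed-form soft-thresholding solution, and checks it against a direct numerical minimization of the defining objective.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def prox_l1(u, gamma):
    """Proximity operator of phi(x) = gamma * ||x||_1 (soft thresholding).

    Solves argmin_x  gamma * ||x||_1 + 0.5 * ||u - x||_2^2  coordinate-wise.
    """
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

# Illustrative inputs (assumptions, not from the article).
u = np.array([1.5, -0.3, 0.0, 2.0])
gamma = 0.5

# Reference: minimize the defining objective directly with a generic solver.
objective = lambda x: gamma * np.abs(x).sum() + 0.5 * np.sum((u - x) ** 2)
reference = minimize(objective, x0=np.zeros_like(u), method="Nelder-Mead").x

print(prox_l1(u, gamma))  # [ 1.  -0.   0.   1.5]
print(reference)          # approximately the same vector, up to solver tolerance
</syntaxhighlight>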
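The fixed-point characterization above suggests iterating the map <math>w \leftarrow \operatorname{prox}_{\gamma R}\left(w - \gamma\nabla F(w)\right)</math>, which is the basic proximal gradient (ISTA-style) iteration. The sketch below applies it to a small synthetic lasso problem; the data, the step size <math>\gamma = 1/L</math> with <math>L</math> the Lipschitz constant of <math>\nabla F</math>, and the iteration count are assumptions chosen only for illustration.

<syntaxhighlight lang="python">
import numpy as np

def prox_l1(u, gamma):
    # Soft thresholding: proximity operator of gamma * ||.||_1.
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

# Synthetic lasso instance (assumed data): F(w) = (1/n)*||y - X w||^2, R(w) = lam*||w||_1.
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.01 * rng.standard_normal(n)
lam = 0.1

grad_F = lambda w: (2.0 / n) * X.T @ (X @ w - y)
L = (2.0 / n) * np.linalg.norm(X, 2) ** 2  # Lipschitz constant of grad F (spectral norm squared)
gamma = 1.0 / L                            # a valid positive step size

# Iterate w <- prox_{gamma*R}(w - gamma * grad F(w)).
w = np.zeros(d)
for _ in range(2000):
    w = prox_l1(w - gamma * grad_F(w), gamma * lam)

# At convergence, w is (numerically) a fixed point of the map, hence a minimizer.
fixed_point_residual = np.linalg.norm(w - prox_l1(w - gamma * grad_F(w), gamma * lam))
print(w.round(3))
print(fixed_point_residual)  # close to zero
</syntaxhighlight>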